MedLink Bounty #728
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Are you sure you want to change the base?
MedLink Bounty #728
Conversation
jhnwu3
left a comment
I'll probably add more comments as I have more time to dig deeper into this, but this is a nice first attempt at a pretty hard bounty.
Some quick thoughts:
- Can we move the MedLink task into the pyhealth.tasks module too? It would also be really helpful to add detailed documentation around the query/document identifiers, and to link this up with the original paper's task of mapping records to a master known patient record.
It would also be nice to have it in docs/, as that would be a nice resource for anyone working on record linkage problems.
Can we also try to build new processors here to pass to the MedLink model?
Actually, I think the sequence processors should have built-in vocabularies here. But it would be nice to update the EmbeddingModel to better support things like initialized GloVe vectors, or just use randomly initialized embeddings for now. This way MedLink can be better integrated with the rest of PyHealth, and I think it would be a nice lesson in replicating the original implementation. (A lot of the techniques are pretty relevant to clinical predictive modeling, so it's a good learning exercise.)
Example of a PR working with the processors instead of the old PyHealth tokenizer approach: https://github.com/sunlabuiuc/PyHealth/pull/610/files
GloVe vectors from the original implementation: https://github.com/zzachw/MedLink
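To make the embedding suggestion concrete, here is a minimal sketch (hypothetical helper, not the PyHealth EmbeddingModel API) of initializing an embedding layer from pretrained GloVe-style vectors, falling back to small random vectors for codes missing from the pretrained set:

```python
import numpy as np
import torch
import torch.nn as nn

def build_embedding(vocab, pretrained, dim, padding_idx=0):
    """Initialize an embedding matrix from a {token: vector} dict.

    Tokens missing from `pretrained` keep a small random vector, so the
    layer works with or without a GloVe-style file. Hypothetical helper.
    """
    weight = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    hits = 0
    for token, idx in vocab.items():
        vec = pretrained.get(token)
        if vec is not None:
            weight[idx] = vec
            hits += 1
    weight[padding_idx] = 0.0  # keep the padding row at zero
    emb = nn.Embedding.from_pretrained(
        torch.tensor(weight), freeze=False, padding_idx=padding_idx
    )
    return emb, hits

# toy vocabulary with two "pretrained" vectors available
vocab = {"<pad>": 0, "ICD9_430": 1, "ICD9_401": 2, "ICD9_250": 3}
pretrained = {
    "ICD9_430": np.ones(8, dtype="float32"),
    "ICD9_401": np.full(8, 2.0, dtype="float32"),
}
emb, hits = build_embedding(vocab, pretrained, dim=8)
```

Setting `freeze=False` keeps the pretrained rows trainable, which matches the "initialize, then fine-tune" use the reviewer describes.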
Summary

Updated with content that completes the MedLink bounty by adding the processor-native implementation as requested (integrated with PyHealth 2.x processors), unit tests with synthetic data, and an end-to-end MIMIC-III demo. It also has optional pretrained embedding initialization (GloVe-style vectors) to better match the original MedLink implementation.

Changes

1) MedLink model implementation (processor-native)
2) Pretrained embeddings support
3) Unit tests
4) End-to-end example notebook
Bounty requirement coverage from feedback
How to run tests

From repo root:
pip install -e .
pytest -q tests/core/test_medlink.py
jhnwu3
left a comment
Very close! Thanks for the hard work!
jhnwu3
left a comment
Can we make sure our documentation (docstrings) clearly documents what the exact inputs and outputs are for the model?
The tests seem to initialize SampleDataset with the samples parameter, but it expects a schema file. So I added a helper to build the dataset with in-memory samples. The tests should pass now.
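As a hedged illustration of one way such a helper could bridge in-memory samples and a file-backed dataset (the function name and the JSONL format here are my assumptions, not the signature of PyHealth's actual `create_sample_dataset`):

```python
import json
import tempfile
from pathlib import Path

def samples_to_jsonl(samples):
    """Write in-memory sample dicts to a temp JSONL file and return its path,
    so a file/schema-backed dataset class can consume them in tests.
    Hypothetical helper for illustration only."""
    tmp = tempfile.NamedTemporaryFile(
        mode="w", suffix=".jsonl", delete=False, encoding="utf-8"
    )
    with tmp as f:
        for sample in samples:
            f.write(json.dumps(sample) + "\n")
    return Path(tmp.name)

samples = [{"patient_id": "1", "admissions": ["ICD9_430", "ICD9_401"], "label": 1}]
path = samples_to_jsonl(samples)
loaded = [json.loads(line) for line in path.read_text().splitlines()]
```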
jhnwu3
left a comment
It's cool that it runs, but there are a couple of things I'm concerned about here, so please do comment if I'm misunderstanding something:
- The EmbeddingModel changes don't seem to appear in the MedLink model itself. It would be good to unify them; otherwise, we can separate them from the EmbeddingModel and simply assume that GloVe embeddings are required for an accurate MedLink model.
- The docstrings need to be updated with the new API using create_sample_dataset, since SampleDataset no longer accepts a samples list due to our updated streaming backend.
- There seems to be confusion around tokenizers and where to place them. For reference, you can use a Processor class if you want a specific List[str] format, or if you need it in embedding format, you can define a GloveProcessor. There are a couple of ways of going about this. Happy to discuss more, but I think it'd be good to separate the model from the tokenizer (what we call a processor in PyHealth).
- The TaskClass, upon further inspection, seems to have schemas that don't follow what the docstrings say.
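To sketch what "a Processor class for a specific List[str] format" could look like, here is a minimal stand-in with a built-in vocabulary (a hypothetical class, not the real base class from pyhealth.processors):

```python
from typing import Dict, List
import torch

class SimpleSequenceProcessor:
    """Minimal stand-in for a PyHealth-style processor: builds a vocabulary
    from training sequences, then maps List[str] code sequences to index
    tensors. Hypothetical class for illustration only."""

    def __init__(self):
        # reserve 0 for padding and 1 for unknown codes
        self.vocab: Dict[str, int] = {"<pad>": 0, "<unk>": 1}

    def fit(self, sequences: List[List[str]]) -> None:
        for seq in sequences:
            for code in seq:
                if code not in self.vocab:
                    self.vocab[code] = len(self.vocab)

    def process(self, seq: List[str]) -> torch.Tensor:
        # unseen codes map to the <unk> index
        return torch.tensor([self.vocab.get(c, 1) for c in seq], dtype=torch.long)

proc = SimpleSequenceProcessor()
proc.fit([["ICD9_430", "ICD9_401"], ["ICD9_250"]])
ids = proc.process(["ICD9_401", "ICD9_999"])  # second code is unseen
```

Keeping the vocabulary inside the processor, rather than inside the model, is the separation of concerns the comment above asks for.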
@@ -1,5 +1,5 @@
 from typing import Set
-from base import BaseTestCase
+from tests.base import BaseTestCase
I don't really understand this change. Is it just because of the Python versioning? It doesn't seem to lead to issues, though.
>>> samples = [{"patient_id": "1", "admissions": ["ICD9_430", "ICD9_401"]}, ...]
>>> input_schema = {"admissions": "code"}
>>> output_schema = {"label": "binary"}
>>> dataset = SampleDataset(path="/some/path", samples=samples, input_schema=input_schema, output_schema=output_schema)
The SampleDataset example here needs to use the create_sample_dataset() function
# Set feature_keys manually since BaseModel extracts it from dataset.input_schema
# but MedLink needs to use the provided feature_keys
self.feature_keys = feature_keys
self.feature_key = feature_keys[0]
Why the first one specifically? If need be, we can always define a medlink processor to make certain assumptions on the inputs.
For reference on processors, see here: https://pyhealth.readthedocs.io/en/latest/api/processors.html
Essentially, a tokenizer is a type of processor in our framework.
self.criterion = nn.BCEWithLogitsLoss()

def _encode_tokens(self, seqs: List[List[str]], device: torch.device):
    """
For instance, this here could be a processor call()
Edit: I just realized the forward() call here doesn't use _encode_tokens?
"""

task_name = "patient_linkage_mimic3"
input_schema = {"conditions": "sequence", "d_conditions": "sequence"}
If you construct a MedLink processor, we can also change our schemas to something like "q_conditions": "medlink" (you can also use the class itself, like "q_conditions": MedLinkProcessor(), if that's easier to map in code) and "d_conditions": "medlink". That would help, since it seems like the model does expect a sequence, but it encodes it in a different way with different GloVe embeddings rather than the current EmbeddingModel approach?
You can also make a List[str] processor if you want to assume strings or something of that sort. Right now the schemas don't seem to mean much at all, so they're just code that adds to the confusion of what is being inputted and outputted here.
For instance, the docstrings don't match up with the code here.
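A toy illustration of the schema idea above, where each input field maps to the processor instance that encodes it (all class and field names here are hypothetical, mirroring the "q_conditions"/"d_conditions" suggestion):

```python
import torch

class ToyCodeProcessor:
    """Toy processor: normalizes codes to upper case, then indexes them.
    Hypothetical class for illustration only."""

    def __init__(self, vocab):
        self.vocab = vocab

    def __call__(self, seq):
        return torch.tensor([self.vocab[c.upper()] for c in seq])

# The schema maps each input field to a processor instance, which makes
# the expected input/output of every field explicit.
vocab = {"ICD9_430": 0, "ICD9_401": 1}
input_schema = {
    "q_conditions": ToyCodeProcessor(vocab),
    "d_conditions": ToyCodeProcessor(vocab),
}

sample = {"q_conditions": ["icd9_430"], "d_conditions": ["icd9_401", "icd9_430"]}
encoded = {key: proc(sample[key]) for key, proc in input_schema.items()}
```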
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocabulary_size()

self.embedding = nn.Embedding(
I don't see how this links up with the EmbeddingModel changes to use Glove vectors?
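For reference, GloVe files are plain text with one `token v1 v2 ... vN` row per line, so loading them into a dict (which an embedding layer can then be initialized from) is a few lines. A hedged sketch, using an in-memory stream in place of a real file:

```python
import io
import numpy as np

def load_glove_txt(fh, dim):
    """Parse a GloVe-format text stream ("token v1 v2 ...") into a
    {token: np.ndarray} dict, skipping malformed rows."""
    vectors = {}
    for line in fh:
        parts = line.rstrip().split(" ")
        if len(parts) != dim + 1:
            continue  # header line or malformed row
        vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return vectors

# stand-in for open("glove.txt"); the tokens here are made up
fake_file = io.StringIO("ICD9_430 0.1 0.2 0.3\nICD9_401 0.4 0.5 0.6\n")
vecs = load_glove_txt(fake_file, dim=3)
```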
>>> from pyhealth.datasets import SampleDataset
>>> from pyhealth.models import MedLink
>>> samples = [{"patient_id": "1", "admissions": ["ICD9_430", "ICD9_401"]}, ...]
>>> input_schema = {"admissions": "code"}
I think we can simply assume that only certain tasks are compatible with the model, so we don't have to explicitly specify a feature_keys argument here, which makes things a little more confusing and opaque to the user.
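One way to drop the explicit feature_keys argument, sketched here with a hypothetical helper: derive the feature key from the task's input schema and fail loudly if the task is not compatible with the model.

```python
def infer_feature_key(input_schema, expected_type="sequence"):
    """Pick the single input field of the expected type from a schema dict,
    raising if the task isn't compatible. Hypothetical helper; the real
    model would read this from dataset.input_schema."""
    keys = [k for k, v in input_schema.items() if v == expected_type]
    if len(keys) != 1:
        raise ValueError(
            f"Model expects exactly one '{expected_type}' input, got {keys}"
        )
    return keys[0]

key = infer_feature_key({"conditions": "sequence", "age": "float"})
```

This keeps the "which field does the model consume" decision in the task definition rather than in a constructor argument.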
PR for MedLink bounty

Tests:
To run the MedLink unit tests, from the project root run:
pytest tests/core/test_medlink.py
(Locally: 3 passed, 1 warning.)
Model Implementation:
Additions to "pyhealth/models/medlink/model.py":
Other changes:
Added examples/medlink_mimic3.ipynb, a runnable notebook that:
Loads the MIMIC-III demo dataset via MIMIC3Dataset.
Defines a patient linkage task to generate query–candidate pairs.
Uses the MedLink helpers to build IR-format data and PyTorch dataloaders.
Trains and evaluates MedLink and reports ranking metrics.
Verified locally:
examples/medlink_mimic3.ipynb runs end-to-end on the MIMIC-III demo dataset.
The notebook includes a note on how to run the MedLink unit tests from project root.
Files to review:
pyhealth/datasets/sample_dataset.py – SampleDataset.get_all_tokens helper for vocabulary construction.
pyhealth/models/medlink/model.py – core MedLink model implementation.
pyhealth/models/medlink/bm25.py – BM25Okapi implementation used in the retrieval pipeline.
pyhealth/models/medlink/utils.py – IR-format conversion, TVT split, candidate generation, dataloaders.
pyhealth/models/__init__.py – export of MedLink.
tests/core/test_medlink.py – synthetic unit tests for MedLink (forward pass, encoders, score shapes).
examples/medlink_mimic3.ipynb – Jupyter notebook for training and evaluating MedLink on the MIMIC-III demo dataset.
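For reviewers unfamiliar with the retrieval step in bm25.py: BM25 (Okapi) scores a document against a query from term frequency, inverse document frequency, and a length normalization. A self-contained sketch of the standard formula (not the PR's implementation), with toy code sequences standing in for admissions:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against a tokenized query with BM25 (Okapi)."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()  # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            num = tf[term] * (k1 + 1)
            den = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * num / den
        scores.append(score)
    return scores

docs = [["ICD9_430", "ICD9_401"], ["ICD9_250", "ICD9_250"], ["ICD9_430"]]
scores = bm25_scores(["ICD9_430"], docs)
```

Documents without the query term score zero, and with b > 0 the shorter matching document outranks the longer one.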